IEEE Transactions on Medical Imaging
● Institute of Electrical and Electronics Engineers (IEEE)
All preprints, ranked by how well they match IEEE Transactions on Medical Imaging's content profile, based on 18 papers previously published here. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Li, S.; Gao, J.; Kim, C.; Choi, S.; Chen, Q.; Wang, Y.; Wu, S.; Zhang, Y.; Huang, T.; Zhou, Y.; Yao, B.; Yao, Y.; Li, C.
Three-dimensional (3D) handheld photoacoustic tomography typically relies on bulky and expensive external positioning trackers to correct motion artifacts, which severely limits its clinical flexibility and accessibility. To address this challenge, we present PA-SfM, a tracker-free framework that leverages exclusively single-modality photoacoustic data for both sensor pose recovery and high-fidelity 3D reconstruction via differentiable acoustic radiation modeling. Unlike traditional Structure-from-Motion (SfM) methods that formulate pose estimation as a geometry-driven optimization over visual features, PA-SfM integrates the acoustic wave equation into a differentiable programming pipeline. By leveraging a high-performance, GPU-accelerated acoustic radiation kernel, the framework simultaneously optimizes the 3D photoacoustic source distribution and the sensor array pose via gradient descent. To ensure robust convergence in freehand scenarios, we introduce a coarse-to-fine optimization strategy that incorporates geometric consistency checks and rigid-body constraints to eliminate motion outliers. We validated the proposed method through both numerical simulations and in-vivo rat experiments. The results demonstrate that PA-SfM achieves sub-millimeter positioning accuracy and restores high-resolution 3D vascular structures comparable to ground-truth benchmarks, offering a low-cost, software-defined solution for clinical freehand photoacoustic imaging. The source code is publicly available at https://github.com/JaegerCQ/PA-SfM.
Adishesha, A. S.; Vanselow, D. J.; La Riviere, P.; Cheng, K. C.; Huang, S. X.
Reduced angular sampling is a key strategy for increasing scanning efficiency of micron-scale computed tomography (micro-CT). Despite boosting throughput, this strategy introduces noise and artifacts due to undersampling. In this work, we present a solution to this issue, by proposing a novel Dense Residual Hierarchical Transformer (DRHT) network to recover high-quality sinograms from 2x, 4x, and 8x undersampled scans. DRHT is trained to utilize the limited information available from sparsely angularly sampled scans and, once trained, it can be applied to recover higher-resolution sinograms from shorter scan sessions. Our proposed DRHT model aggregates the benefits of a hierarchical multi-scale structure along with the combination of local and global feature extraction through dense residual convolutional blocks and non-overlapping window transformer blocks, respectively. We also propose a novel noise-aware loss function named KL-L1 to improve sinogram restoration to full resolution. KL-L1, a weighted combination of pixel-level and distribution-level cost functions, leverages inconsistencies in noise distribution and uses learnable spatial weights to improve the training of the DRHT model. We present ablation studies and evaluations of our method against other state-of-the-art (SOTA) models over multiple datasets. Our proposed DRHT network achieves an average increase in peak signal-to-noise ratio (PSNR) of 17.73 dB and in structural similarity index (SSIM) of 0.161, for 8x upsampling, across the three unique datasets, compared to their respective bicubic-interpolated versions. This novel approach can be utilized to decrease radiation exposure to patients and reduce imaging time for large-scale CT imaging projects.
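The KL-L1 loss above is described only as a weighted combination of pixel-level and distribution-level terms with learnable spatial weights. A minimal NumPy sketch of one plausible fixed-weight form follows; the function name, the scalar trade-off `w`, and the normalization choices are assumptions, not the paper's exact formulation:

```python
import numpy as np

def kl_l1_loss(pred, target, w=0.5, eps=1e-8):
    """Hedged sketch of a KL + L1 composite loss.

    pred, target: non-negative sinogram patches (2D arrays).
    w: scalar trade-off; the paper learns spatial weights instead.
    """
    # Pixel-level term: mean absolute error.
    l1 = np.mean(np.abs(pred - target))
    # Distribution-level term: KL divergence between the two images
    # normalized to sum to 1 (treated as discrete distributions).
    p = target.ravel() / (target.sum() + eps) + eps
    q = pred.ravel() / (pred.sum() + eps) + eps
    kl = np.sum(p * np.log(p / q))
    return w * kl + (1.0 - w) * l1
```

Here the distribution-level term treats each normalized sinogram as a discrete probability distribution; the paper's learnable spatial weighting would replace the scalar `w`.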
Hatamizadeh, A.; Hoogi, A.; Sengupta, D.; Lu, W.; Wilcox, B.; Rubin, D.; Terzopoulos, D.
Lesion segmentation is an important problem in computer-assisted diagnosis that remains challenging due to the prevalence of low-contrast, irregular boundaries that are not amenable to shape priors. We introduce Deep Active Lesion Segmentation (DALS), a fully automated segmentation framework that leverages the powerful nonlinear feature extraction abilities of fully Convolutional Neural Networks (CNNs) and the precise boundary delineation abilities of Active Contour Models (ACMs). Our DALS framework benefits from an improved level-set ACM formulation with a per-pixel-parameterized energy functional and a novel multiscale encoder-decoder CNN that learns an initialization probability map along with parameter maps for the ACM. We evaluate our lesion segmentation model on a new Multiorgan Lesion Segmentation (MLS) dataset that contains images of various organs, including brain, liver, and lung, across different imaging modalities (MR and CT). Our results demonstrate favorable performance compared to competing methods, especially for small training datasets.
Wang, X.; Cao, J.; Han, K.; Choi, M.; She, Y.; Scheven, U.; Avci, R.; Du, P.; Cheng, L. K.; Natale, M. R. D.; Furness, J. B.; Liu, Z.
Gastrointestinal magnetic resonance imaging (MRI) provides rich spatiotemporal data about the volume and movement of the food inside the stomach, but does not directly report on the muscular activity of the stomach itself. Here we describe a novel approach to characterize the motility of the stomach wall that drives the volumetric changes of the gastric content. In this approach, a surface template was used as a deformable model of the stomach wall. A neural ordinary differential equation (ODE) was optimized to model a diffeomorphic flow that ascribed the deformation of the stomach wall to a continuous biomechanical process. Driven by this diffeomorphic flow, the surface template of the stomach progressively changes its shape over time or between conditions, while preserving its topology and manifoldness. We tested this approach with MRI data collected from 10 Sprague Dawley rats under a lightly anesthetized condition. Our proposed approach allowed us to characterize gastric anatomy and motility with a surface coordinate system common to every individual. Anatomical and motility features could be characterized for each individual, and then compared and summarized across individuals for group-level analysis. As a result, high-resolution functional maps were generated to reveal the spatial, temporal, and spectral characteristics of muscle activity as well as the coordination of motor events across different gastric regions. The relationship between muscle thickness and gastric motility was also evaluated throughout the entire stomach wall. Such a structure-function relationship was used to delineate two distinctive functional regions of the stomach. These results demonstrate the efficacy of using GI-MRI to measure and model gastric anatomy and function. The approach described herein is expected to enable non-invasive and accurate mapping of gastric motility throughout the entire stomach for both preclinical and clinical studies.
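The diffeomorphic-flow idea above, a template deformed by integrating a smooth velocity field so that topology is preserved, can be illustrated with a forward-Euler toy. The rotational field and step counts here are illustrative assumptions; the actual work learns the field with a neural ODE:

```python
import numpy as np

def integrate_flow(points, velocity, t1=1.0, steps=100):
    """Advect vertices through a velocity field by forward Euler.

    points: (N, 2) array of vertex coordinates.
    velocity: callable mapping (N, 2) positions to (N, 2) velocities.
    Small steps of a smooth field approximate a diffeomorphic
    (topology-preserving) deformation, as in the neural-ODE approach.
    """
    dt = t1 / steps
    x = points.astype(float).copy()
    for _ in range(steps):
        x = x + dt * velocity(x)
    return x

# Illustrative smooth field: rigid rotation about the origin.
rotate = lambda x: np.stack([-x[:, 1], x[:, 0]], axis=1)
```

Integrating the rotation field for a quarter period carries (1, 0) close to (0, 1) without tearing or folding the point set.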
Hu, P.; Tong, X.; Lin, L.; Wang, L.
Tomographic imaging modalities are described by large system matrices. Sparse sampling and tissue motion degrade the system matrices and image quality. Various existing techniques improve the image quality without correcting the system matrices. Here, we compress the system matrices to improve computational efficiency (e.g., 42 times) using singular value decomposition and fast Fourier transform. Enabled by the efficiency, we propose (1) fast sparsely sampling functional imaging by incorporating a densely sampled prior image into the system matrix, which maintains the critical linearity while mitigating artifacts, and (2) intra-image nonrigid motion correction by incorporating the motion as subdomain translations into the system matrix and reconstructing the translations together with the image iteratively. We demonstrate the methods in 3D photoacoustic computed tomography with significantly improved image qualities and clarify their applicability to X-ray CT and MRI or other types of imperfections due to the similarities in system matrices.
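The SVD half of the compression described above can be sketched with a truncated decomposition; the toy rank and matrix are assumptions, and the paper additionally uses FFTs and reports a 42-fold efficiency gain:

```python
import numpy as np

def compress_matrix(A, rank):
    """Truncated-SVD compression of a system matrix A.

    Returns factors (U_r, s_r, Vt_r) whose product approximates A;
    applying them costs O(rank*(m+n)) per vector instead of O(m*n).
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank, :]

def apply_compressed(factors, x):
    """Compute A @ x through the low-rank factors."""
    U_r, s_r, Vt_r = factors
    return U_r @ (s_r * (Vt_r @ x))
```

For a system matrix whose numerical rank is at or below the chosen truncation, the factored product reproduces the full matrix-vector product.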
Martinez, N.; Sapiro, G.; Tannenbaum, A. R.; Hollmann, T. J.; Nadeem, S.
Segmenting noisy multiplex spatial tissue images constitutes a challenging task, since the characteristics of both the noise and the biology being imaged differ significantly across tissues and modalities; this is compounded by the high monetary and time costs associated with manual annotations. It is therefore imperative to build algorithms that can accurately segment the noisy images based on a small number of annotations. Recently, techniques to derive such an algorithm from a few scribbled annotations have been proposed, mostly relying on the refinement and estimation of pseudo-labels. Other techniques leverage the success of self-supervised denoising as a parallel task to potentially improve the segmentation objective when few annotations are available. In this paper, we propose a method that augments the segmentation objective via self-supervised multi-channel quantized imputation, meaning that each class of the segmentation objective can be characterized by a mixture of distributions. This approach leverages the observation that perfect pixel-wise reconstruction or denoising of the image is not needed for accurate segmentation, and introduces a self-supervised classification objective that better aligns with the overall segmentation goal. We demonstrate the superior performance of our approach for a variety of cancer datasets acquired with different highly-multiplexed imaging modalities in real clinical settings. Code for our method along with a benchmarking dataset is available at https://github.com/natalialmg/ImPartial.
Hatamizadeh, A.; Terzopoulos, D.; Myronenko, A.
Textures and edges contribute different information to image recognition. Edges and boundaries encode shape information, while textures manifest the appearance of regions. Despite the success of Convolutional Neural Networks (CNNs) in computer vision and medical image analysis applications, predominantly only texture abstractions are learned, which often leads to imprecise boundary delineations. In medical imaging, expert manual segmentation often relies on organ boundaries; for example, to manually segment a liver, a medical practitioner usually identifies edges first and subsequently fills in the segmentation mask. Motivated by these observations, we propose a plug-and-play module, dubbed Edge-Gated CNNs (EG-CNNs), that can be used with existing encoder-decoder architectures to process both edge and texture information. The EG-CNN learns to emphasize the edges in the encoder, to predict crisp boundaries by an auxiliary edge supervision, and to fuse its output with the original CNN output. We evaluate the effectiveness of the EG-CNN with various mainstream CNNs on two publicly available datasets, BraTS 19 and KiTS 19 for brain tumor and kidney semantic segmentation. We demonstrate how the addition of EG-CNN consistently improves segmentation accuracy and generalization performance.
Tolooshams, B.; Lin, L.; Callier, T.; Wang, J.; Pal, S.; Chandrashekar, A.; Rabut, C.; Li, Z.; Blagden, C.; Norman, S. L.; Azizzadenesheli, K.; Liu, C.; Shapiro, M. G.; Andersen, R. A.; Anandkumar, A.
Functional ultrasound imaging (fUSI) is a promising neuroimaging method that infers neural activity by detecting cerebral blood volume changes. It offers high sensitivity and spatial resolution relative to fMRI and is an epidural alternative to electrophysiology for medical and neuroscience applications, including brain-computer interfaces. However, current fUSI methods require hundreds of compounded images and ultrasound pulse emissions, leading to high computational costs, memory demands, and potential probe heating. We propose VARiable Sampling fUSI (VARS-fUSI), the first deep learning fUSI method to allow for different sampling durations and rates during training and inference by using neural operators. VARS-fUSI reconstructs high-quality fUSI images using 10-15% of the time or sampling rate needed per image while preserving decodable behavior-correlated signals. Additionally, VARS-fUSI offers efficient finetuning for generalization to new animals and humans. Demonstrated across mouse, monkey, and human data, VARS-fUSI achieves state-of-the-art performance, enhancing imaging efficiency by significantly reducing storage and processing needs.
Chabouh, G.; Pialot, B.; Denis, L.; Dumas, R.; Couture, O.; Muleki Seya, P.; Varray, F.
Ultrasound Localization Microscopy (ULM) has been applied in various preclinical settings and in the clinic to reveal the microvasculature in deep organs. However, most ULM images employ standard Delay-and-Sum (DAS) beamforming. In standard ULM conditions, lengthy acquisition times are required to fully reconstruct small vessels due to the need for spatially isolated microbubbles, resulting in low temporal resolution. When microbubbles are densely packed, localizing a point spread function with significant main and side lobes becomes challenging due to matrix arrays' low signal-to-noise ratio and spatial resolution. In this work, we applied adaptive beamformers such as high-order DAS (pDAS), Coherence Factor (CF), Coherence Factor with Gaussian Filtering (CFGF), and statistical interpretation of beamforming (iMAP) to provide more complete 3D ULM maps in vitro and in vivo (rat kidney). Specifically, the CF and iMAP adaptive beamformers achieved higher resolution (32.9 and 27.2 microns, respectively), as measured by the Fourier Shell Correlation (FSC), compared to the standard DAS beamformer, which had an FSC value of 38.6 microns.
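The Coherence Factor named above has a standard definition: the ratio of coherent-sum energy to N times the incoherent per-channel energy. It can be sketched per pixel as follows (function names are illustrative):

```python
import numpy as np

def coherence_factor(channel_signals):
    """Standard coherence factor for one pixel.

    channel_signals: per-element samples after receive delays
    (real or complex). CF = |coherent sum|^2 / (N * incoherent
    energy), bounded in [0, 1]; it attenuates the DAS output
    where channels disagree (side lobes, noise).
    """
    n = len(channel_signals)
    coherent = np.abs(np.sum(channel_signals)) ** 2
    incoherent = n * np.sum(np.abs(channel_signals) ** 2)
    return coherent / incoherent if incoherent > 0 else 0.0

def cf_weighted_das(channel_signals):
    """DAS pixel value scaled by its coherence factor."""
    return coherence_factor(channel_signals) * np.sum(channel_signals)
```

CF approaches 1 when all channels agree (main lobe) and 0 when they cancel, which is what sharpens the localization of densely packed microbubbles.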
Quan, Y.; Xue, Y.; Zhu, H.
Reconstructing medical images from partial measurements is a key inverse problem in computed tomography (CT), essential for reducing radiation exposure while maintaining diagnostic quality. Conventional supervised learning approaches depend on paired datasets generated under fixed acquisition settings, limiting their adaptability to unseen measurement processes. To overcome this, we propose a fully unsupervised framework based on score-based generative models. Our method learns the prior distribution of high-quality medical images and employs a physics-informed sampling strategy to reconstruct images consistent with both the learned prior and observed measurements. Experiments on multiple CT reconstruction tasks show that our approach achieves comparable or superior performance to existing methods while generalizing robustly across varying measurement conditions. The code is available.
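The physics-informed sampling described above combines a learned prior score with measurement consistency. A toy Langevin-style analogue with a known Gaussian prior is sketched below; everything here (the analytic prior score, step size, and iteration count) is an illustrative assumption, since the real method uses a trained score network and a CT forward operator:

```python
import numpy as np

def langevin_reconstruct(y, A, sigma=1.0, step=1e-2, iters=2000, seed=0):
    """Toy posterior sampling for y = A x + noise with prior N(0, I).

    Each update adds step * (prior_score + data_grad) plus Gaussian
    exploration noise, where prior_score = -x (score of N(0, I)) and
    data_grad = A^T (y - A x) / sigma^2 (data-consistency gradient).
    A score network would replace prior_score in the real method.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        prior_score = -x
        data_grad = A.T @ (y - A @ x) / sigma**2
        x += step * (prior_score + data_grad)
        x += np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x
```

With a well-conditioned forward operator and small measurement noise, the chain concentrates near the measurement-consistent solution.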
Chen, C.; Huiru, W.; Peilin, G.; Xi, C.; Ren, J.
Clearing Assisted Scattering Tomography (CAST) extends coherent scattering tomography to whole-brain imaging, enabling visualization of fine-scale brain-wide connectivity. As a coherent optical tomography modality, CAST is inherently affected by speckle noise, which degrades image quality and limits quantitative analysis. However, existing speckle reduction methods developed for optical coherence tomography (OCT) are not directly transferable to CAST images due to differences in sample and noise statistics. Here, we present a learning-based cleared-sample speckle reduction network, termed CLEAR Net, specifically designed for CAST imaging, which effectively suppresses speckle noise in whole-brain white matter images while preserving fine structural details. We quantitatively benchmarked CLEAR Net against representative speckle reduction algorithms on CAST datasets and further evaluated its generalizability using publicly available ophthalmic datasets.
You, Q.; Lowerison, M.; Shin, Y.; Chen, X.; Chandra Sekaran, N. V.; Dong, Z.; Llano, D. A.; Anastasio, M. A.; Song, P.
Super-resolution ultrasound microvessel imaging based on ultrasound localization microscopy (ULM) is an emerging imaging modality that is capable of resolving micron-scaled vessels deep into tissue. In practice, ULM is limited by the need for contrast injection, long data acquisition, and computationally expensive post-processing times. In this study, we present a contrast-free super-resolution Doppler (CS Doppler) technique that uses deep generative networks to achieve super-resolution with short data acquisition. The training dataset comprises spatiotemporal ultrafast ultrasound signals acquired from in vivo mouse brains, while the testing dataset includes in vivo mouse brain, chicken embryo chorioallantoic membrane (CAM), and healthy human subjects. The in vivo mouse imaging studies demonstrate that CS Doppler achieves an approximately 2-fold improvement in spatial resolution when compared with conventional power Doppler. In addition, the microvascular images generated by CS Doppler showed good agreement with the corresponding ULM images, as indicated by a structural similarity index of 0.7837 and a peak signal-to-noise ratio of 25.52. Moreover, CS Doppler was able to preserve a temporal profile of the blood flow (e.g., pulsatility) similar to conventional power Doppler. Finally, the generalizability of CS Doppler was demonstrated on testing data of different tissues using different imaging settings. The fast inference time of the proposed deep generative network also allows CS Doppler to be implemented for real-time imaging. These features of CS Doppler offer a practical, fast, and robust microvascular imaging solution for many preclinical and clinical applications of Doppler ultrasound.
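The peak signal-to-noise ratio quoted above (25.52) follows the standard definition, sketched here:

```python
import numpy as np

def psnr(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio in dB:
    PSNR = 10 * log10(data_range^2 / MSE)."""
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(data_range**2 / mse)
```

`data_range` is the maximum possible intensity (1.0 for normalized images, 255 for 8-bit).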
Demirel, O. B.; Zhang, C.; Yaman, B.; Gulle, M.; Shenoy, C.; Leiner, T.; Kellman, P.; Akcakaya, M.
Real-time cine cardiac MRI provides an ECG-free free-breathing alternative to clinical gold-standard ECG-gated breath-hold segmented cine MRI for evaluation of heart function. Real-time cine MRI data acquisition during free-breathing snapshot imaging enables imaging of patient cohorts that cannot be imaged with segmented or breath-hold acquisitions, but requires rapid imaging to achieve sufficient spatio-temporal resolutions. However, at high acceleration rates, conventional reconstruction techniques suffer from residual aliasing and temporal blurring, including advanced methods such as compressed sensing with radial trajectories. Recently, deep learning (DL) reconstruction has emerged as a powerful tool in MRI. However, its utility for free-breathing real-time cine MRI has been limited, as database-learning of spatio-temporal correlations with varying breathing and cardiac motion patterns across subjects has been challenging. Zero-shot self-supervised physics-guided deep learning (PG-DL) reconstruction has been proposed to overcome such challenges of database training by enabling subject-specific training. In this work, we adapt zero-shot PG-DL for real-time cine MRI with a spatio-temporal regularization. We compare our method to TGRAPPA, locally low-rank (LLR) regularized reconstruction, and database-trained PG-DL reconstruction, for both retrospectively and prospectively accelerated datasets. Results on highly accelerated real-time Cartesian cine MRI show that the proposed method outperforms other reconstruction methods, both visually in terms of noise and aliasing, and quantitatively.
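The locally low-rank (LLR) baseline mentioned above regularizes by penalizing the nuclear norm of spatiotemporal (Casorati) blocks, and the corresponding proximal step is singular-value soft-thresholding, sketched here. How blocks are extracted from the image series is an assumption left out of this sketch:

```python
import numpy as np

def sv_soft_threshold(block, lam):
    """Proximal operator of lam * nuclear norm: soft-threshold the
    singular values of a (pixels x frames) Casorati block. This is
    the core per-block step of LLR-regularized reconstruction.
    """
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)   # shrink each singular value
    return (U * s_shrunk) @ Vt            # rebuild the low-rank block
```

Shrinking singular values suppresses temporally incoherent components (noise, aliasing) while retaining the dominant dynamics of each local block.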
Sun, Y.; Li, K.; Chen, D.; Hu, Y.; Zhang, S.
Deep learning models based on medical images have made significant strides in predicting treatment outcomes. However, previous methods have primarily concentrated on single time-point images, neglecting the temporal dynamics and changes inherent in longitudinal medical images. Thus, we propose a Transformer-based longitudinal image analysis framework (LOMIA-T) to contrast and fuse latent representations from pre- and post-treatment medical images for predicting treatment response. Specifically, we first design a treatment response-based contrastive loss to enhance latent representation by discerning evolutionary processes across various disease stages. Then, we integrate latent representations from pre- and post-treatment CT images using a cross-attention mechanism. Considering the redundancy in the dual-branch output features induced by the cross-attention mechanism, we propose a clinically interpretable feature fusion strategy to predict treatment response. Experimentally, the proposed framework outperforms several state-of-the-art longitudinal image analysis methods on an in-house Esophageal Squamous Cell Carcinoma (ESCC) dataset, encompassing 170 pre- and post-treatment contrast-enhanced CT image pairs from ESCC patients who underwent neoadjuvant chemoradiotherapy. Ablation experiments validate the efficacy of the proposed treatment response-based contrastive loss and feature fusion strategy. The codes will be made available at https://github.com/syc19074115/LOMIA-T.
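The cross-attention fusion described above follows the standard scaled dot-product form, sketched here for single-head, unbatched inputs. This is an illustration of the mechanism, not the paper's exact module:

```python
import numpy as np

def cross_attention(q, k, v):
    """Scaled dot-product cross-attention.

    q: (Nq, d) queries from one branch (e.g., pre-treatment features);
    k, v: (Nk, d) keys/values from the other branch. Each output row
    is a softmax-weighted mixture of the value rows.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # rows sum to 1
    return w @ v
```

When all keys are identical, the softmax weights are uniform and every output row reduces to the mean of the value rows, which is a quick sanity check on the implementation.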
Zhu, W.; Huang, Y.; Vannan, M. A.; Liu, S.; Xu, D.; Fan, W.; Qian, Z.; Xie, X.
Echocardiography has become routinely used in the diagnosis of cardiomyopathy and abnormal cardiac blood flow. However, manually measuring myocardial motion and cardiac blood flow from echocardiograms is time-consuming and error-prone. Computer algorithms that can automatically track and quantify myocardial motion and cardiac blood flow are highly sought after, but have not been very successful due to noise and high variability of echocardiography. In this work, we propose a neural multi-scale self-supervised registration (NMSR) method for automated myocardial and cardiac blood flow dense tracking. NMSR incorporates two novel components: 1) utilizing a deep neural net to parameterize the velocity field between two image frames, and 2) optimizing the parameters of the neural net in a sequential multi-scale fashion to account for large variations within the velocity field. Experiments demonstrate that NMSR yields significantly better registration accuracy than state-of-the-art methods, such as advanced normalization tools (ANTs) and VoxelMorph, for both myocardial and cardiac blood flow dense tracking. Our approach promises to provide a fully automated method for fast and accurate analyses of echocardiograms.
Chato, L.; Sereda, T.
In this paper, we present a novel and efficient framework for cross-modality medical image synthesis, developed for BraSyn-Task 8. Our method combines the fast-sampling capabilities of the Fast-Denoising Diffusion Probabilistic Model (Fast-DDPM) with Discrete Wavelet-Transformed components, as used in Conditional Wavelet Diffusion Models. By reducing the number of denoising steps to 100 and using wavelet-transformed inputs, we accelerate both training and inference and reduce memory usage while preserving high image quality. The framework was trained on the BraTS 2025 dataset, which includes four magnetic resonance imaging (MRI) modalities: T1-weighted, contrast-enhanced T1-weighted (T1c), T2-weighted, and FLAIR. We developed four independent models, each synthesizing one missing modality from the remaining three. Evaluation on the BraSyn 2025 Task 8 public validation set demonstrated competitive performance using standard image metrics: mean squared error, signal-to-noise ratio, and structural similarity index. Our method achieved third place in the challenge on the final test data, with fast inference times (average 41-67 seconds per case). To assess clinical relevance, we applied a pretrained nnU-Net segmentation model to the synthesized modalities. Segmentation results yielded high Dice coefficients: 0.877 for the whole tumor, 0.769 for the tumor core, and 0.667 for the enhancing tumor. These results confirm the effectiveness and reliability of our approach for missing-modality synthesis, enabling accurate downstream analysis in high-dimensional medical imaging tasks. Our team in the challenge is USD-2025-Chato-Sereda (Team ID: 3551654). GitHub: https://github.com/tsereda/brats-synthesis
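The Dice coefficients quoted above (0.877, 0.769, 0.667) follow the standard overlap definition, sketched here for binary masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 |A intersect B| / (|A| + |B|), in [0, 1]."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```

Identical masks score 1, disjoint masks score 0; the small `eps` guards against division by zero when both masks are empty.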
Jain, R.
Glioblastoma multiforme is an aggressive brain tumor with the lowest survival rate of any human cancer due to its invasive growth dynamics. These dynamics result in recurrent tumor pockets hidden from medical imaging, which standard radio-treatment and surgical margins fail to cover. Mathematical modeling of tumor growth via partial differential equations (PDE) is well-known; however, it remains unincorporated in clinical practice due to prolonged run-times, inter-patient anatomical variation, and initial conditions that ignore a patient's current tumor. This study proposes a glioblastoma multiforme tumor evolution model, GlioMod, that aims to learn spatiotemporal features of tumor concentration and brain geometry for personalized therapeutic planning. A dataset of 6,000 synthetic tumors is generated from real patient anatomies using PDE-based modeling. Our model employs image-to-image regression using a novel encoder-decoder architecture to predict tumor concentration at future states. GlioMod is tested in its simulation of forward tumor growth and reconstruction of patient anatomy on 900 pairs of unseen brain geometries against their corresponding PDE-solved future tumor concentrations. We demonstrate that spatiotemporal context achieved via neural modeling yields tumor evolution predictions personalized to patients and still generalizable to unseen anatomies. Its performance is measured in three areas: (1) regression error rates, (2) quantitative and qualitative tissue agreement, and (3) run-time compared to state-of-the-art numerical solvers. The results demonstrate that GlioMod can predict tumor growth with high accuracy while being two orders of magnitude faster, and is therefore suitable for clinical use. GlioMod is provided as an open-source software package, which includes the synthetic tumor data generated from the patients in our study.
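The abstract does not name the PDE, but the standard choice for glioma growth modeling is the reaction-diffusion (Fisher-KPP) equation; a 1D explicit finite-difference sketch under that assumption follows (grid, coefficients, and boundaries are illustrative):

```python
import numpy as np

def simulate_fisher_kpp(u0, D=0.1, rho=1.0, dx=1.0, dt=0.1, steps=100):
    """Explicit finite-difference integration of the Fisher-KPP model
    du/dt = D * d2u/dx2 + rho * u * (1 - u), a common choice for
    glioma growth (the specific PDE behind GlioMod is an assumption).

    Zero-flux (Neumann) boundaries via edge padding; the explicit
    scheme is stable for dt <= dx^2 / (2 * D).
    """
    u = u0.astype(float).copy()
    for _ in range(steps):
        up = np.pad(u, 1, mode="edge")              # reflective ghosts
        lap = (up[2:] - 2.0 * u + up[:-2]) / dx**2  # discrete Laplacian
        u = u + dt * (D * lap + rho * u * (1.0 - u))
    return u
```

A small seed spreads outward while the logistic term saturates the concentration at the carrying capacity of 1, which is the invading-front behavior these tumor models are used for.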
Moazami, S.; Rezvani, S.; Dasgupta, A.; Oberai, A.
Brain magnetic resonance imaging (MRI) is pivotal in diagnosing and monitoring neurological disorders. However, despite their extensive applications, MR images have certain shortcomings. In particular, factors other than the anatomy of brain tissues influence the intensity distribution of voxels in MR images. These factors include hardware, software, magnetic field strength, and acquisition protocol. This inconsistency poses challenges in multi-site neuroimaging studies, where images are obtained from various devices with minimal control over acquisition parameters. Image harmonization algorithms aim to eliminate non-biological characteristics in MR images through various approaches, including converting images from multiple sites into a format resembling that of a designated target site. Among image harmonization methods, those relying on deep learning algorithms have gained significant attention recently. Nevertheless, certain aspects of deep learning-based image harmonization remain unexplored, notably the integration of probabilistic deep generative models to transform the distribution of MR images to a desired distribution. Inspired by this, we introduced a feature-preserving conditional generative adversarial network (FP-cGAN) that converts images from multiple origins into the format of a target site while preserving anatomical features by imposing a novel regularizing constraint. We conduct our experiments on MR images from the SRPBS dataset, which comprises unpaired images in addition to paired (traveling subjects) images from multiple sites. We utilize the unpaired data for training our models and the paired data for evaluation. Furthermore, we compare our results with cycleGAN and histogram matching, two widely used image harmonization methods. Our experiments reveal that our approach surpasses the other techniques.
Article Highlights:
- We introduce an FP-cGAN model to harmonize multi-site MRI scans into a target site.
- A novel anatomical feature-invariant constraint maintains tissue integrity.
- The models are trained in an unpaired setting without requiring traveling subjects.
- We evaluate harmonization performance using paired traveling subjects.
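Histogram matching, named above as a comparison baseline, can be sketched as quantile mapping between intensity distributions (a minimal version; practical MRI harmonization typically works on masked brain voxels):

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities so their empirical distribution matches
    the reference's, via rank/quantile mapping.

    source, reference: arrays of intensities (any shape).
    Returns an array shaped like source with remapped intensities.
    """
    shape = source.shape
    s = source.ravel()
    order = np.argsort(s)                       # rank of each voxel
    ref_sorted = np.sort(reference.ravel())
    # Interpolate reference quantiles at the source's rank positions.
    quantiles = np.linspace(0.0, 1.0, s.size)
    ref_q = np.linspace(0.0, 1.0, ref_sorted.size)
    matched = np.empty_like(s, dtype=float)
    matched[order] = np.interp(quantiles, ref_q, ref_sorted)
    return matched.reshape(shape)
```

The mapping is monotone, so intensity ordering (and hence gross tissue contrast) is preserved while the marginal distribution is forced onto the target site's; the GAN approach above aims to do this while also preserving finer anatomical features.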
Maidu, B.; Gonzalo, A.; Guerrero-Hurtado, M.; Bargellini, C.; Martinez-Legazpi, P.; Bermejo, J.; Contijoch, F.; Flores, O.; Garcia-Villalba, M.; McVeigh, E.; Kahn, A.; del Alamo, J. C.
Atrial fibrillation (AF) promotes blood stasis and thrombus formation, most often within the left atrial appendage (LAA), and can lead to stroke or transient ischemic attack (TIA). Time-resolved contrast-enhanced computed tomography (4D CT) captures left atrial (LA) opacification and washout, but it does not directly provide quantitative stasis metrics such as blood residence time. Patient-specific computational fluid dynamics (CFD) can quantify LA/LAA residence time, yet routine clinical use is limited by computational cost and sensitivity to patient-specific boundary conditions. Here, we present two complementary approaches to infer time-resolved 3D residence time fields directly from contrast dynamics. First, a physics-informed neural network (PINN) treats contrast as a passive scalar and jointly reconstructs velocity and residence time by enforcing the incompressible Navier-Stokes equations and transport equations for contrast concentration and residence time in moving, patient-specific LA anatomies. Second, an indicator dilution theory (IDT) formulation computes voxelwise, time-resolved residence time maps from contrast time curves alone by constructing a PV-referenced impulse response and modeling transport with a tank-in-series model with spatially dependent parameters. Both methods are benchmarked against patient-specific CFD in six cases spanning diverse LA function, including three patients with TIA or thrombus in the LAA and three patients free of events. Both approaches reproduce expected spatial and temporal trends, with higher residence time in the distal LAA and higher LAA residence time in cases with TIA or thrombus. IDT demonstrates the closest agreement with CFD across the full range of residence times and produces maps in seconds, facilitating clinical translation. 
In contrast, the PINN additionally recovers phase-dependent atrial flow structures, but tends to smooth and underestimate the highest residence-time regions and requires hours of training. Together, these results support a scalable workflow in which IDT enables rapid stasis screening from contrast CT, and PINNs provide a complementary pathway for detailed, patient-specific hemodynamic inference when full-field flow information is needed.
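The classical indicator-dilution starting point behind the IDT maps above is the first-moment mean transit time of a voxel's concentration curve; the paper's PV-referenced tank-in-series model is more elaborate, but the first moment is the canonical quantity:

```python
import numpy as np

def mean_transit_time(t, c):
    """Classical indicator-dilution mean transit time:
    MTT = integral(t * c(t) dt) / integral(c(t) dt),
    computed with trapezoidal quadrature.

    t: sample times; c: contrast concentration curve at one voxel.
    """
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)

    def trapezoid(y):
        # Trapezoidal rule on the (possibly nonuniform) grid t.
        return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

    return trapezoid(t * c) / trapezoid(c)
```

For a mono-exponential washout c(t) = exp(-t / tau), the first moment recovers tau, which is the single-tank limit of the tank-in-series family.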
Bertolo, A.; Ferrier, J.; Demeulenaere, O.; Dizeux, A.; Delaporte, T.; Osmanski, B.; Tanter, M.; Pernot, M.; Deffieux, T.
Brain perfusion relies on a complex vascular network of arteries, veins, and capillaries to meet its constant demand for oxygen and nutrients. Disruption of this microvascular system is a hallmark of many neurological disorders, including small vessel disease, stroke, and brain tumors. As such, high-resolution in vivo imaging of cerebral microvascular flow and structure remains critical to understanding these pathologies. Among imaging methods, Ultrasound Localization Microscopy (ULM) allows noninvasive imaging of the microvascular network at subwavelength resolution using injected microbubbles, but the approach remains mainly limited to 2D imaging with few volumetric implementations. In this study, we explore in vivo transcranial 3D ULM of the mouse brain using Row-Column Arrays (RCA) and introduce an analysis framework to build a flow-directed vascular graph from the ULM microbubble tracking data, allowing differentiation between subgraphs of artery-like and vein-like vascular segments. Combined with Allen-based and radius-based segmentations, we extract metrics including cerebral blood flow (CBF), microbubble (MB) velocity, flowrate, CBV fraction, vascular segment length, and tortuosity in sixty-six different vascular classes. This high-sensitivity framework enables in vivo microvascular imaging and quantification in mice and provides a scalable platform for preclinical neurovascular studies in health and disease.
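Tortuosity, one of the metrics listed above, is conventionally the ratio of a vessel segment's arc length to its end-to-end chord length (the paper may use a variant of this definition):

```python
import numpy as np

def tortuosity(points):
    """Vessel-segment tortuosity: arc length / chord length.

    points: (N, d) ordered centerline coordinates of one segment.
    A perfectly straight segment scores 1.0; winding segments score
    higher, which is how tortuosity flags pathological vasculature.
    """
    points = np.asarray(points, dtype=float)
    seg = np.diff(points, axis=0)                 # consecutive steps
    arc = np.sum(np.linalg.norm(seg, axis=1))     # path length
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord
```

On a graph-based vascular representation like the one above, this would be evaluated per edge using the tracked microbubble centerline points.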